Show the code
!pip install tensorflow
import pandas as pd
import numpy as np
from lets_plot import *
import matplotlib.pyplot as plt
import xgboost as xgb
from xgboost import XGBClassifier, XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import root_mean_squared_error, r2_score
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
LetsPlot.setup_html(isolated_frame=True)
Elevator pitch
I was able to create a machine learning model that used features like whether the home is one story and the quality of the building materials to decide whether a home was built before or after 1980, reaching an F1 score of about 0.94. When I added the additional features from the expanded neighborhoods dataset, that F1 score rose to about 0.97, a substantial improvement.
QUESTION|TASK 1
Create 2-3 charts that evaluate potential relationships between the home variables and before1980. Explain what you learn from the charts that could help a machine learning algorithm.
First, the vast majority of one-story homes were built since 1980. I did some research into why most one-story homes are newer construction, and it seems to be because, starting in the 1980s, the Baby Boomer generation began to need more accessible homes. Since they were living mostly on their own or as couples, those homes did not need to be very large, so many more single-story homes were built to accommodate that proportionally massive generation.
Show the code
df = pd.read_csv("https://raw.githubusercontent.com/byuidatascience/data4dwellings/master/data-raw/dwellings_ml/dwellings_ml.csv")
Show the code
df_plotting = df.copy()
df_plotting['arcstyle_ONE-STORY'] = df_plotting['arcstyle_ONE-STORY'].astype('bool')
df_plotting['before1980'] = df_plotting['before1980'].astype('bool')
legend_labels = {
    'true': 'Before 1980',
    'false': 'Since 1980'
}
(
    ggplot(df_plotting, aes(x="arcstyle_ONE-STORY", fill='before1980'))
    + geom_bar(stat='count', position='dodge')
    + scale_fill_manual(
        values=['#e6272f', '#6d9ade'],
        name="House Built:",
        labels=legend_labels
    )
    + labs(
        title="One Story Homes Before and After 1980",
        x="One Story",
        y='Number of Homes'
    )
    + theme(
        plot_title=element_text(size=20)
    )
)
Secondly, most homes without attached garages were built since 1980. I honestly could not find anything about why this might be happening, but my guess would be that attached garages are becoming less popular for aesthetic reasons.
Show the code
df_plotting['gartype_Att'] = df_plotting['gartype_Att'].astype('bool')
(
    ggplot(df_plotting, aes(x="gartype_Att", fill='before1980'))
    + geom_bar(stat='count', position='dodge')
    + scale_fill_manual(
        values=['#e6272f', '#6d9ade'],
        name="House Built:",
        labels=legend_labels
    )
    + labs(
        title="Homes with Attached Garages Before and After 1980",
        x="Attached Garage",
        y='Number of Homes'
    )
    + theme(
        plot_title=element_text(size=20)
    )
)
Thirdly, the data has a "Quality" scale from A to D plus X, which indicates how high quality the home is. The majority of the homes in the dataset are in the "C" category, meaning decent quality homes, but not the best. Most of the homes in this C category were built in or after 1980, which could suggest many more pre-built homes where the homeowner did not choose the materials. After 1980 it became a lot cheaper (hence the lower-quality building materials) and more popular to buy a home pre-built rather than build it yourself.
Show the code
df_plotting['quality_C'] = df_plotting['quality_C'].astype('bool')
(
    ggplot(df_plotting, aes(x="quality_C", fill='before1980'))
    + geom_bar(stat='count', position='dodge')
    + scale_fill_manual(
        values=['#e6272f', '#6d9ade'],
        name="House Built:",
        labels=legend_labels
    )
    + labs(
        title='Homes of Quality "C" Before and After 1980',
        x="C-Quality",
        y='Number of Homes'
    )
    + theme(
        plot_title=element_text(size=20)
    )
)
QUESTION|TASK 2
Build a classification model labeling houses as being built “before 1980” or “during or after 1980”. Your goal is to reach or exceed 90% accuracy. Explain your final model choice (algorithm, tuning parameters, etc) and describe what other models you tried.
I went with an XGBClassifier model for this project because I am in the Machine Learning class right now, and I remember my teacher in that class saying that XGBoost is often his go-to model when he needs to get things done quickly and well. For parameters, I set the 'objective' to 'binary:hinge', which tells the model that the output should be either a 1 or a 0 rather than a probability. The 'eval_metric' tells the model what metric to track during training, and the 'error' option is calculated as #(wrong cases)/#(all cases). I didn't end up trying any other models here because the XGBoost model got over 90% accuracy on the first try.
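Just to make that formula concrete, here is a tiny sketch with made-up arrays (not project data) showing that the 'error' metric is simply the misclassification rate, i.e. one minus accuracy:

import numpy as np

# Hypothetical labels and predictions, purely to illustrate the formula
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_hat  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

error = np.mean(y_true != y_hat)  # #(wrong cases) / #(all cases)
print(error)                      # 0.2, i.e. 1 - accuracy of 0.8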
Show the code
features_in_order_of_importance = [
    'livearea', 'basement', 'netprice', 'numbaths', 'smonth', 'finbsmnt',
    'numbdrm', 'tasp', 'deduct', 'abstrprd', 'nocars', 'gartype_Att',
    'sprice', 'quality_C', 'status_I', 'quality_D', 'condition_AVG',
    'arcstyle_ONE-STORY', 'arcstyle_MIDDLE UNIT', 'stories', 'syear',
    'qualified_Q', 'gartype_Det', 'arcstyle_ONE AND HALF-STORY',
    'arcstyle_END UNIT', 'arcstyle_TWO-STORY', 'condition_Good',
    'quality_B', 'quality_A', 'totunits', 'condition_VGood',
    'arcstyle_TRI-LEVEL', 'quality_X', 'gartype_det/CP',
    'arcstyle_BI-LEVEL', 'arcstyle_THREE-STORY', 'condition_Excel',
    'arcstyle_TRI-LEVEL WITH BASEMENT', 'arcstyle_CONVERSIONS',
    'gartype_CP', 'arcstyle_TWO AND HALF-STORY', 'gartype_Att/Det',
    'qualified_U', 'gartype_None', 'arcstyle_SPLIT LEVEL',
    'condition_Fair', 'gartype_att/CP', 'status_V'
]
X = df[features_in_order_of_importance]
y = df['before1980']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(
    objective='binary:hinge',
    eval_metric='error',
    use_label_encoder=False
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
Show the code
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
print('Accuracy:', accuracy)
print('Precision:', precision)
print('Recall:', recall)
print('F1:', f1)
Accuracy: 0.9277765655684049
Precision: 0.9364697802197802
Recall: 0.9491820396797772
F1: 0.9427830596369923
QUESTION|TASK 3
Justify your classification model by discussing the most important features selected by your model. This discussion should include a feature importance chart and a description of the features.
I actually did this part before part one, which is why I graphed the three most important features there. As I explained in part one, it makes sense that one-story construction would be the most important feature (because of the large aging population needing single-story homes). In short, see Task 1 for an explanation of the most important features.
Show the code
influences = (pd.DataFrame({'importance': model.feature_importances_,
                            'feature': X_train.columns})
              .sort_values('importance')
              .query('importance >= 0.02'))
(
    ggplot(data=influences, mapping=aes(x='feature', y='importance'))
    + geom_bar(stat='identity')
    + coord_flip()
    + labs(
        x="Feature",
        y="Importance",
        title="XGBoost Classifier Feature Importance"
    )
    + theme(plot_title=element_text(size=15, hjust=-4.1))
)
QUESTION|TASK 4
Describe the quality of your classification model using 2-3 different evaluation metrics. You also need to explain how to interpret each of the evaluation metrics you use.
You can see above (where I ran my model) the accuracy, precision, recall, and F1 scores for my model. Here is a quick explanation of each of these metrics:
Accuracy - The ratio of correct predictions to total predictions. This is not always the best metric because it does not tell us what the model does well versus where it fails. For example, if the data were 90% one category and 10% another, the model could predict the first category every time and still score 90% accuracy, which sounds decent, but the model would not actually be very intelligent.
Precision - When the model predicts a certain category, what percent of those predictions were correct? A high precision means the model had few "false positives". But focusing on just the precision score can induce a model to be very conservative in its "positive" predictions: it might only guess a category when it is absolutely sure, which keeps precision high at the cost of missed positives.
Recall - Out of all the actual positives, how many did the model correctly identify? Focusing on recall can have the opposite effect of focusing on precision. Where precision makes the model hesitant to guess "positive", recall encourages it to guess "positive" more often, because a model that guesses "positive" for everything gets 100% recall.
F1 Score - F1 is the harmonic mean of precision and recall, a balancing ratio that forces the model to do well on both rather than one or the other. A high F1 score means the model performed well on both precision and recall, which usually makes F1 the most representative single number for general performance. A small worked sketch of these trade-offs follows.
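To make the accuracy caveat concrete, here is a small sketch with made-up labels (not project data): a lazy majority-class predictor scores 90% accuracy on a 90/10 split while its precision, recall, and F1 are all zero.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical 90/10 imbalanced labels; the "model" always predicts 0
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)

print(accuracy_score(y_true, y_pred))                    # 0.9 -- looks decent
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0 -- never predicts positive
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0 -- misses every positive
print(f1_score(y_true, y_pred, zero_division=0))         # 0.0 -- the balanced view exposes it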
STRETCH QUESTION|TASK 1
Repeat the classification model using 3 different algorithms. Display their Feature Importance, and Decision Matrix. Explain the differences between the models and which one you would recommend to the Client.
All three additional models reached or came close to the 90% accuracy goal. The single Decision Tree was the weakest (about 0.90 accuracy, 0.92 F1), which makes sense since it is one tree with no ensembling. The Neural Network landed in roughly the same range (about 0.90 accuracy, 0.92 F1). The Random Forest did the best of the three (about 0.93 accuracy, 0.95 F1), even slightly edging out my original XGBoost model, so of these models it is the one I would recommend to the Client.
Show the code
# DECISION TREE
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier()
tree.fit(X_train, y_train)
tree_pred = tree.predict(X_test)
tree_accuracy = accuracy_score(y_test, tree_pred)
tree_precision = precision_score(y_test, tree_pred)
tree_recall = recall_score(y_test, tree_pred)
tree_f1 = f1_score(y_test, tree_pred)
print('Decision Tree')
print('Accuracy:', tree_accuracy)
print('Precision:', tree_precision)
print('Recall:', tree_recall)
print('F1:', tree_f1)
print('Confusion Matrix:')
print(confusion_matrix(y_test, tree_pred))
Decision Tree
Accuracy: 0.8989744708706088
Precision: 0.9236990154711674
Recall: 0.9143752175426384
F1: 0.9190134686024138
Confusion Matrix:
[[1493 217]
[ 246 2627]]
Show the code
influences = (pd.DataFrame({'importance': tree.feature_importances_,
                            'feature': X_train.columns})
              .sort_values('importance')
              .query('importance >= 0.02'))
(
    ggplot(data=influences, mapping=aes(x='feature', y='importance'))
    + geom_bar(stat='identity')
    + coord_flip()
    + labs(
        x="Feature",
        y="Importance",
        title="Decision Tree Classifier Feature Importance"
    )
    + theme(plot_title=element_text(size=15, hjust=-1.3))
)
Show the code
# RANDOM FOREST
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier()
forest.fit(X_train, y_train)
forest_pred = forest.predict(X_test)
forest_accuracy = accuracy_score(y_test, forest_pred)
forest_precision = precision_score(y_test, forest_pred)
forest_recall = recall_score(y_test, forest_pred)
forest_f1 = f1_score(y_test, forest_pred)
print('Random Forest')
print('Accuracy:', forest_accuracy)
print('Precision:', forest_precision)
print('Recall:', forest_recall)
print('F1:', forest_f1)
print('Confusion Matrix:')
print(confusion_matrix(y_test, forest_pred))
Random Forest
Accuracy: 0.9312677285620773
Precision: 0.9434812760055479
Recall: 0.9470936303515489
F1: 0.945284002084419
Confusion Matrix:
[[1547 163]
[ 152 2721]]
Show the code
influences = (pd.DataFrame({'importance': forest.feature_importances_,
                            'feature': X_train.columns})
              .sort_values('importance')
              .query('importance >= 0.02'))
(
    ggplot(data=influences, mapping=aes(x='feature', y='importance'))
    + geom_bar(stat='identity')
    + coord_flip()
    + labs(
        x="Feature",
        y="Importance",
        title="Random Forest Classifier Feature Importance"
    )
    + theme(plot_title=element_text(size=15, hjust=-1.45))
)
Show the code
# NEURAL NETWORK
import tensorflow as tf
from tensorflow import keras
from keras import Input
# Scale features to [0, 1] so the network trains well
norm = MinMaxScaler().fit(X_train)
X_train = norm.transform(X_train)
X_test = norm.transform(X_test)
# Small fully-connected network with a sigmoid output for binary classification
model = Sequential()
model.add(Input(shape=(len(X_train[0]),)))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
opt = keras.optimizers.Adam()
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# Stop training once validation loss stops improving for 10 epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(X_train, y_train, epochs=2000, validation_split=.2, batch_size=32, callbacks=[early_stop], shuffle=False)
hist = pd.DataFrame(history.history)
hist = hist.reset_index()
# Threshold the sigmoid outputs at 0.5 to get class labels
predictions = model.predict(X_test)
binary_predictions = (predictions >= 0.5).astype(int)
nn_accuracy = accuracy_score(y_test, binary_predictions)
nn_precision = precision_score(y_test, binary_predictions)
nn_recall = recall_score(y_test, binary_predictions)
nn_f1 = f1_score(y_test, binary_predictions)
print('Neural Network')
print('Accuracy:', nn_accuracy)
print('Precision:', nn_precision)
print('Recall:', nn_recall)
print('F1:', nn_f1)
print('Confusion Matrix:')
print(confusion_matrix(y_test, binary_predictions))
(Training log truncated: with early stopping on validation loss, training ended after 53 of the maximum 2000 epochs, finishing around 0.886 validation accuracy and 0.282 validation loss.)
Neural Network
Accuracy: 0.8974470870608772
Precision: 0.9085345120707242
Recall: 0.9300382875043508
F1: 0.9191606467148263
Confusion Matrix:
[[1441 269]
[ 201 2672]]
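As a quick sanity check (an addition to the original write-up), the scores above can be recovered directly from the confusion matrix, where rows are the actual classes and columns the predicted ones:

# Counts read off the matrix above: rows = actual [0, 1], columns = predicted [0, 1]
tn, fp, fn, tp = 1441, 269, 201, 2672
print('accuracy: ', (tn + tp) / (tn + fp + fn + tp))  # ~0.8974
print('precision:', tp / (tp + fp))                   # ~0.9085
print('recall:   ', tp / (tp + fn))                   # ~0.9300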
The feature-importance graph from the earlier models doesn't carry over to the neural network: a Keras model has no built-in importance attribute the way the tree-based models do. A model-agnostic workaround like permutation importance can stand in for it (sketched below), though how meaningful such a ranking is for a network is debatable.
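A minimal sketch of permutation importance: shuffle one column at a time and measure how much test accuracy drops. It assumes the earlier cells defined the trained Keras model `model`, the scaled test matrix `X_test`, the labels `y_test`, and the feature names in `X.columns` (treat those names as assumptions about the earlier code, not guarantees):

def nn_permutation_importance(model, X, y, feature_names, n_repeats=5, seed=42):
    rng = np.random.default_rng(seed)
    base = accuracy_score(y, (model.predict(X, verbose=0).ravel() >= 0.5).astype(int))
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the j-th feature's relationship with y
            preds = (model.predict(Xp, verbose=0).ravel() >= 0.5).astype(int)
            scores.append(accuracy_score(y, preds))
        drops.append(base - np.mean(scores))  # bigger accuracy drop = more important feature
    return pd.Series(drops, index=feature_names).sort_values(ascending=False)

# importance = nn_permutation_importance(model, X_test, y_test, X.columns)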
STRETCH QUESTION|TASK 2
Join the dwellings_neighborhoods_ml.csv data to the dwelling_ml.csv on the parcel column to create a new dataset. Duplicate the code for the stretch question above and update it to use this data. Explain the differences and if this changes the model you recommend to the Client.
The additional features made a big difference in the performance of every model type. Below each model's scores I added how much better it performed than its previous version (the same model without the additional features); across all the models, the extra features helped a lot. I would recommend the Random Forest to the Client because, of the models I tested, it performed the best on the evaluation metrics explained in Task 4.
Show the code
df_neighborhoods = pd.read_csv("https://raw.githubusercontent.com/byuidatascience/data4dwellings/master/data-raw/dwellings_neighborhoods_ml/dwellings_neighborhoods_ml.csv")
# Join the neighborhood features onto the dwellings data by parcel ID
large_df = pd.merge(df, df_neighborhoods, on="parcel")
large_X = large_df.drop(columns=['parcel', 'yrbuilt', 'before1980'])
large_y = large_df.before1980
X_large_train, X_large_test, y_large_train, y_large_test = train_test_split(large_X, large_y, test_size=0.2, random_state=42)
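One quick check worth running after a join like this (a small addition, not part of the original analysis): confirm the merge kept one row per parcel rather than dropping or duplicating rows.

print(df.shape, df_neighborhoods.shape, large_df.shape)  # row counts before/after the merge
print(large_df['parcel'].is_unique)  # True only if parcel is a unique key in the joined data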
Show the code
# XGBoost
model_large = XGBClassifier(
    objective='binary:hinge',  # hinge loss: the model outputs hard 0/1 labels directly
    eval_metric='error',
    use_label_encoder=False    # deprecated no-op in recent XGBoost releases
)
model_large.fit(X_large_train, y_large_train)
y_large_pred = model_large.predict(X_large_test)
accuracy_large = accuracy_score(y_large_test, y_large_pred)
precision_large = precision_score(y_large_test, y_large_pred)
recall_large = recall_score(y_large_test, y_large_pred)
f1_large = f1_score(y_large_test, y_large_pred)
print('Large Dataset XGBoost')
print('Accuracy:', accuracy_large)
print('Precision:', precision_large)
print('Recall:', recall_large)
print('F1:', f1_large)
print()
print("Difference of scores before and after adding data:")
print('Accuracy Increase:', accuracy_large - accuracy)
print('Precision Increase:', precision_large - precision)
print('Recall Increase:', recall_large - recall)
print('F1 Increase:', f1_large - f1)
Large Dataset XGBoost
Accuracy: 0.9672805292329698
Precision: 0.9676958261863923
Recall: 0.9797395079594791
F1: 0.9736804257155185
Difference of scores before and after adding data:
Accuracy Increase: 0.03950396366456488
Precision Increase: 0.031226045966612048
Recall Increase: 0.03055746827970185
F1 Increase: 0.030897366078526223
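To see which of the added neighborhood features the boosted model actually leans on, XGBoost's stock plotting helper works here (a small addition, reusing the `xgb` and `plt` imports from the setup cell):

xgb.plot_importance(model_large, max_num_features=10)  # top 10 features by split count
plt.show()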
Show the code
# DECISION TREE
from sklearn.tree import DecisionTreeClassifier

large_tree = DecisionTreeClassifier()
large_tree.fit(X_large_train, y_large_train)
large_tree_pred = large_tree.predict(X_large_test)
large_tree_accuracy = accuracy_score(y_large_test, large_tree_pred)
large_tree_precision = precision_score(y_large_test, large_tree_pred)
large_tree_recall = recall_score(y_large_test, large_tree_pred)
large_tree_f1 = f1_score(y_large_test, large_tree_pred)
print('Large Dataset Decision Tree')
print('Accuracy:', large_tree_accuracy)
print('Precision:', large_tree_precision)
print('Recall:', large_tree_recall)
print('F1:', large_tree_f1)
print()
print("Difference of scores before and after adding data:")
print('Accuracy Increase:', large_tree_accuracy - tree_accuracy)
print('Precision Increase:', large_tree_precision - tree_precision)
print('Recall Increase:', large_tree_recall - tree_recall)
print('F1 Increase:', large_tree_f1 - tree_f1)
Large Dataset Decision Tree
Accuracy: 0.9604863221884499
Precision: 0.9651898734177216
Recall: 0.9710564399421129
F1: 0.96811426922522
Difference of scores before and after adding data:
Accuracy Increase: 0.06151185131784109
Precision Increase: 0.04149085794655416
Recall Increase: 0.056681222399474485
F1 Increase: 0.049100800622806196
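A default DecisionTreeClassifier grows until its leaves are pure, so it is worth checking how large the fitted tree actually got (an added check, not in the original):

print('depth: ', large_tree.get_depth())
print('leaves:', large_tree.get_n_leaves())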
Show the code
# RANDOM FOREST
from sklearn.ensemble import RandomForestClassifier

large_forest = RandomForestClassifier()
large_forest.fit(X_large_train, y_large_train)
large_forest_pred = large_forest.predict(X_large_test)
large_forest_accuracy = accuracy_score(y_large_test, large_forest_pred)
large_forest_precision = precision_score(y_large_test, large_forest_pred)
large_forest_recall = recall_score(y_large_test, large_forest_pred)
large_forest_f1 = f1_score(y_large_test, large_forest_pred)
print('Large Dataset Random Forest')
print('Accuracy:', large_forest_accuracy)
print('Precision:', large_forest_precision)
print('Recall:', large_forest_recall)
print('F1:', large_forest_f1)
print()
print("Difference of scores before and after adding data:")
print('Accuracy Increase:', large_forest_accuracy - forest_accuracy)
print('Precision Increase:', large_forest_precision - forest_precision)
print('Recall Increase:', large_forest_recall - forest_recall)
print('F1 Increase:', large_forest_f1 - forest_f1)
Large Dataset Random Forest
Accuracy: 0.9685320936885392
Precision: 0.9750796870472327
Recall: 0.9739507959479016
F1: 0.9745149145670432
Difference of scores before and after adding data:
Accuracy Increase: 0.03726436512646192
Precision Increase: 0.031598411041684815
Recall Increase: 0.02685716559635265
F1 Increase: 0.029230912482624216
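The forest's feature_importances_ attribute gives the same kind of ranking used for the smaller dataset earlier; a short addition to list the top ten:

forest_importances = pd.Series(large_forest.feature_importances_, index=large_X.columns)
print(forest_importances.nlargest(10))  # top 10 features by mean impurity decrease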
Show the code
# NEURAL NETWORK
from tensorflow import keras
from tensorflow.keras.layers import Input

# Scale features to [0, 1]; fit the scaler on the training split only
large_norm = MinMaxScaler().fit(X_large_train)
X_large_train = large_norm.transform(X_large_train)
X_large_test = large_norm.transform(X_large_test)

model = Sequential()
model.add(Input(shape=(len(X_large_train[0]),)))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

opt = keras.optimizers.Adam()
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# Stop once val_loss hasn't improved for 10 epochs; note this keeps the
# last epoch's weights -- add restore_best_weights=True to keep the best ones
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(X_large_train, y_large_train, epochs=2000, validation_split=0.2,
                    batch_size=32, callbacks=[early_stop], shuffle=False)
hist = pd.DataFrame(history.history)
hist = hist.reset_index()

predictions = model.predict(X_large_test)
binary_predictions = (predictions >= 0.5).astype(int)
large_nn_accuracy = accuracy_score(y_large_test, binary_predictions)
large_nn_precision = precision_score(y_large_test, binary_predictions)
large_nn_recall = recall_score(y_large_test, binary_predictions)
large_nn_f1 = f1_score(y_large_test, binary_predictions)
print('Large Dataset Neural Network')
print('Accuracy:', large_nn_accuracy)
print('Precision:', large_nn_precision)
print('Recall:', large_nn_recall)
print('F1:', large_nn_f1)
print()
print("Difference of scores before and after adding data:")
print('Accuracy Increase:', large_nn_accuracy - nn_accuracy)
print('Precision Increase:', large_nn_precision - nn_precision)
print('Recall Increase:', large_nn_recall - nn_recall)
print('F1 Increase:', large_nn_f1 - nn_f1)
Epoch 1/2000
560/560 - 1s 1ms/step - accuracy: 0.8138 - loss: 0.4356 - val_accuracy: 0.9376 - val_loss: 0.1669
Epoch 2/2000
560/560 - 1s 960us/step - accuracy: 0.9399 - loss: 0.1538 - val_accuracy: 0.9466 - val_loss: 0.1433
Epoch 3/2000
560/560 - 1s 1ms/step - accuracy: 0.9509 - loss: 0.1325 - val_accuracy: 0.9499 - val_loss: 0.1362
Epoch 4/2000
560/560 - 1s 961us/step - accuracy: 0.9559 - loss: 0.1231 - val_accuracy: 0.9513 - val_loss: 0.1323
Epoch 5/2000
560/560 - 1s 979us/step - accuracy: 0.9586 - loss: 0.1172 - val_accuracy: 0.9533 - val_loss: 0.1291
Epoch 6/2000
560/560 - 1s 985us/step - accuracy: 0.9588 - loss: 0.1126 - val_accuracy: 0.9544 - val_loss: 0.1267
Epoch 7/2000
560/560 - 1s 986us/step - accuracy: 0.9598 - loss: 0.1087 - val_accuracy: 0.9555 - val_loss: 0.1253
Epoch 8/2000
560/560 - 1s 975us/step - accuracy: 0.9606 - loss: 0.1055 - val_accuracy: 0.9575 - val_loss: 0.1241
Epoch 9/2000
560/560 - 1s 966us/step - accuracy: 0.9616 - loss: 0.1026 - val_accuracy: 0.9571 - val_loss: 0.1233
Epoch 10/2000
560/560 - 1s 963us/step - accuracy: 0.9626 - loss: 0.0999 - val_accuracy: 0.9586 - val_loss: 0.1229
Epoch 11/2000
560/560 - 1s 979us/step - accuracy: 0.9633 - loss: 0.0973 - val_accuracy: 0.9589 - val_loss: 0.1226
Epoch 12/2000
560/560 - 1s 969us/step - accuracy: 0.9638 - loss: 0.0950 - val_accuracy: 0.9584 - val_loss: 0.1226
Epoch 13/2000
560/560 - 1s 978us/step - accuracy: 0.9652 - loss: 0.0929 - val_accuracy: 0.9573 - val_loss: 0.1229
Epoch 14/2000
560/560 - 1s 979us/step - accuracy: 0.9650 - loss: 0.0911 - val_accuracy: 0.9573 - val_loss: 0.1232
Epoch 15/2000
560/560 - 1s 985us/step - accuracy: 0.9650 - loss: 0.0894 - val_accuracy: 0.9578 - val_loss: 0.1237
Epoch 16/2000
560/560 - 1s 978us/step - accuracy: 0.9653 - loss: 0.0877 - val_accuracy: 0.9571 - val_loss: 0.1242
Epoch 17/2000
560/560 - 1s 983us/step - accuracy: 0.9657 - loss: 0.0864 - val_accuracy: 0.9569 - val_loss: 0.1250
Epoch 18/2000
560/560 - 1s 961us/step - accuracy: 0.9669 - loss: 0.0849 - val_accuracy: 0.9566 - val_loss: 0.1258
Epoch 19/2000
560/560 - 1s 992us/step - accuracy: 0.9676 - loss: 0.0837 - val_accuracy: 0.9564 - val_loss: 0.1266
Epoch 20/2000
560/560 - 1s 971us/step - accuracy: 0.9684 - loss: 0.0827 - val_accuracy: 0.9564 - val_loss: 0.1268
Epoch 21/2000
560/560 - 1s 978us/step - accuracy: 0.9687 - loss: 0.0816 - val_accuracy: 0.9562 - val_loss: 0.1272
175/175 - 0s 603us/step
Large Dataset Neural Network
Accuracy: 0.9576256034328625
Precision: 0.956583427922815
Recall: 0.9756874095513748
F1: 0.9660409800831065
Difference of scores before and after adding data:
Accuracy Increase: 0.06017851637198535
Precision Increase: 0.04804891585209081
Recall Increase: 0.04564912204702398
F1 Increase: 0.046880333368280125
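To make the before/after comparison easier to scan, the four models' score increases can be collected into one table (an added convenience, using only the score variables already defined above):

gains = pd.DataFrame({
    'XGBoost': [accuracy_large - accuracy, precision_large - precision,
                recall_large - recall, f1_large - f1],
    'Decision Tree': [large_tree_accuracy - tree_accuracy, large_tree_precision - tree_precision,
                      large_tree_recall - tree_recall, large_tree_f1 - tree_f1],
    'Random Forest': [large_forest_accuracy - forest_accuracy, large_forest_precision - forest_precision,
                      large_forest_recall - forest_recall, large_forest_f1 - forest_f1],
    'Neural Network': [large_nn_accuracy - nn_accuracy, large_nn_precision - nn_precision,
                       large_nn_recall - nn_recall, large_nn_f1 - nn_f1],
}, index=['Accuracy', 'Precision', 'Recall', 'F1'])
print(gains.round(3))  # score increase from adding the neighborhood features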
STRETCH QUESTION|TASK 3
Can you build a model that predicts the year a house was built? Explain the model and the evaluation metrics you would use to determine if the model is good.
I decided to use the XGBRegressor for this problem because it is fairly easy to build, and XGBoost performed about as well as the Random Forest on the classification problem, so I figured it would do fairly well on the regression problem too. I did some hyperparameter tuning with a grid search (the GridSearchCV code is commented out because it takes a long time to run and only needed to run once). The model got an RMSE of about 12.6, meaning its typical error is on the order of 12.6 years, with large misses penalized more heavily (so if it guessed a home was built in 1983, the true year is usually within roughly ±12.6 years of that). The MAE was about 6.9, meaning that on average the prediction was 6.9 years off the actual value. Lastly, the R² value measures how well the model explains the variance in the yrbuilt values: about 88.5% of the variation is captured by the model, while the remaining 11.5% is due to unexplained factors (higher is better, with 1 being a perfect fit).
Show the code
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

X_reg = large_df.drop(columns=['parcel', 'yrbuilt', 'before1980'])
y_reg = large_df.yrbuilt
X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42)

# Grid search used once to find the parameters below, then commented out
# because it takes a long time to run:
# param_grid = {
#     'n_estimators': [100, 300, 500],
#     'learning_rate': [0.01, 0.1, 0.2],
#     'max_depth': [3, 5, 7],
#     'subsample': [0.7, 0.8, 1.0],
#     'colsample_bytree': [0.7, 0.8, 1.0]
# }
# search = GridSearchCV(XGBRegressor(), param_grid, cv=5, scoring='neg_mean_squared_error', n_jobs=-1)
# search.fit(X_reg_train, y_reg_train)
# best_model = search.best_estimator_
# print("Best parameters:", search.best_params_)

regr = XGBRegressor(colsample_bytree=0.7, learning_rate=0.2, max_depth=7, n_estimators=500, subsample=0.8)
regr.fit(X_reg_train, y_reg_train)
reg_pred = regr.predict(X_reg_test)
rmse = np.sqrt(mean_squared_error(y_reg_test, reg_pred))
mae = mean_absolute_error(y_reg_test, reg_pred)
r2 = r2_score(y_reg_test, reg_pred)
print(f"RMSE: {rmse}")
print(f"MAE: {mae}")
print(f"R^2: {r2}")
RMSE: 12.637653317450606
MAE: 6.895981788635254
R^2: 0.8850180506706238
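For context on those numbers (an added check, not in the original): a baseline that always predicts the mean year built has an R² of about 0 by construction, so comparing its RMSE against the model's 12.6 shows how much the model actually buys.

from sklearn.dummy import DummyRegressor

baseline = DummyRegressor(strategy='mean').fit(X_reg_train, y_reg_train)
base_pred = baseline.predict(X_reg_test)
print('Baseline RMSE:', np.sqrt(mean_squared_error(y_reg_test, base_pred)))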